Current Issue: January–March | Volume: 2024 | Issue: 1 | Articles: 5
The high potential of brain-computer interfaces (BCIs) and video games for upper limb rehabilitation has been demonstrated in recent years. In this work, we describe the implementation of a prototype BCI with feedback based on a virtual environment, in which the lateral movement of a character is controlled by predicting the subject’s motor intention. The electroencephalographic signals were processed using a Finite Impulse Response (FIR) filter, Common Spatial Patterns (CSP), and Linear Discriminant Analysis (LDA). A video game, written in C# on the Unity3D platform, served as the virtual environment. The test results showed that the prototype, based on electroencephalographic signal acquisition, has the potential to support real-time applications such as avatar control or assistive devices, achieving a maximum control time of 65 s. In addition, we observed that feedback plays a crucial role in an interface: it helps the person not only to feel motivated but also to learn to produce a more consistent motor intention. We also found that when little calibration data is recorded, the probability that the system makes erroneous predictions increases. These results demonstrate the usefulness of the development as support for people who require upper limb motor rehabilitation, and show that virtual environments, such as video games, can motivate such people during the rehabilitation process.
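The FIR → CSP → LDA pipeline named in the abstract can be sketched as follows. This is a minimal illustration on synthetic two-class EEG data, not the authors’ implementation: the sampling rate, the 8–30 Hz band, the filter order, and the number of CSP components are all assumed values.

```python
import numpy as np
from scipy.signal import firwin, filtfilt
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
fs = 250                                  # Hz, assumed sampling rate
n_trials, n_ch, n_samp = 40, 8, 2 * fs
t = np.arange(n_samp) / fs

# Synthetic two-class EEG: class 1 carries extra 10 Hz power on channels 0-1
X = rng.standard_normal((n_trials, n_ch, n_samp))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, :2, :] += 2.0 * np.sin(2 * np.pi * 10 * t)

# 1) FIR band-pass filter (8-30 Hz mu/beta band, assumed)
b = firwin(101, [8.0, 30.0], pass_zero=False, fs=fs)
X = filtfilt(b, [1.0], X, axis=-1)

# 2) CSP: generalized eigendecomposition of the two class covariances
def mean_cov(trials):
    return np.mean([x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)

c0, c1 = mean_cov(X[y == 0]), mean_cov(X[y == 1])
_, vecs = eigh(c0, c0 + c1)               # eigenvalues sorted ascending
W = vecs[:, [0, 1, -2, -1]]               # two filters per spectral extreme

# 3) Log-variance features of CSP-projected trials, classified with LDA
proj = np.einsum('cf,ncs->nfs', W, X)
feats = np.log(np.var(proj, axis=-1))
clf = LinearDiscriminantAnalysis().fit(feats, y)
acc = clf.score(feats, y)                 # training accuracy on toy data
```

The CSP filters are taken from both extremes of the generalized eigenvalue spectrum, which maximize variance for one class while minimizing it for the other; an online system would fit the filters and LDA on calibration trials only.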
This retrospective study presents and summarizes our long-term efforts in the popularization of robotics, engineering, and artificial intelligence (STEM) using the NAO humanoid robot. By a conservative estimate, over a span of 8 years, we engaged at least a couple of thousand participants: approximately 70% were preschool children, 15% were elementary school students, and 15% were teenagers and adults. We describe several robot applications that were developed specifically for this task and assess their qualitative performance outside a controlled research setting, catering to various demographics, including those with special needs (ASD, ADHD). Five groups of applications are presented: (1) motor development activities and games, (2) children’s games, (3) theatrical performances, (4) artificial intelligence applications, and (5) data harvesting applications. Different cases of human–robot interaction are considered and evaluated in light of our experience, and we discuss their weak points and potential improvements. We examine the response of the audience when confronted with a humanoid robot featuring intelligent behavior, such as conversational intelligence and emotion recognition. We consider the importance of the robot’s physical appearance, the emotional dynamics of human–robot engagement across age groups, and the relevance of non-verbal cues, and we analyze drawings crafted by preschool children both before and after their interaction with the NAO robot.
A brain–computer interface (BCI) is a computer-based system that enables communication between the brain and the outside world, allowing users to interact with computers through neural activity. This brain signal is obtained from electroencephalogram (EEG) recordings. A significant obstacle to the development of EEG-based BCIs is the classification of subject-independent motor imagery data, since EEG data are highly individualized. Deep learning techniques such as the convolutional neural network (CNN) have demonstrated their effectiveness in feature extraction for increasing classification accuracy. In this paper, we present a multi-branch (five-branch) 2D convolutional neural network that employs different hyperparameters for each branch. The proposed model achieved promising results for cross-subject classification and outperformed EEGNet, ShallowConvNet, DeepConvNet, MMCNN, and EEGNet_Fusion on three public datasets. Our proposed model, EEGNet Fusion V2, achieves 89.6% and 87.8% accuracy for the actual and imagined motor activity of the eegmmidb dataset, and scores of 74.3% and 84.1% for the BCI IV-2a and IV-2b datasets, respectively. However, the proposed model has a higher computational cost: it takes around 3.5 times more computation time per sample than EEGNet_Fusion.
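The multi-branch idea, in which each branch applies its own hyperparameters before the branch outputs are fused for classification, can be illustrated with a small PyTorch sketch. The kernel lengths, filter counts, and pooling sizes below are placeholder values, not the paper’s actual EEGNet Fusion V2 configuration.

```python
import torch
import torch.nn as nn

class MultiBranchEEGNet(nn.Module):
    """Sketch of a five-branch 2D CNN for EEG trials (hypothetical sizes).

    Each branch varies one hyperparameter, the temporal kernel length,
    standing in for the paper's per-branch hyperparameter sets; branch
    outputs are concatenated and fed to a linear classifier.
    """

    def __init__(self, n_channels=64, n_classes=4,
                 kernel_lengths=(16, 32, 64, 96, 128)):
        super().__init__()
        self.branches = nn.ModuleList()
        for k in kernel_lengths:
            self.branches.append(nn.Sequential(
                # temporal convolution along the sample axis
                nn.Conv2d(1, 8, (1, k), padding=(0, k // 2), bias=False),
                nn.BatchNorm2d(8),
                # spatial convolution across all EEG channels
                nn.Conv2d(8, 16, (n_channels, 1), bias=False),
                nn.BatchNorm2d(16),
                nn.ELU(),
                nn.AdaptiveAvgPool2d((1, 8)),
                nn.Flatten(),
            ))
        self.classifier = nn.Linear(len(kernel_lengths) * 16 * 8, n_classes)

    def forward(self, x):
        # x: (batch, 1, channels, samples)
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.classifier(feats)

model = MultiBranchEEGNet()
logits = model(torch.randn(2, 1, 64, 480))   # two dummy 64-channel trials
```

Fusing branches by concatenating their pooled feature maps lets the classifier weigh temporal scales per subject, which is one plausible reason multi-branch designs transfer better across subjects; it also multiplies the per-sample computation roughly by the number of branches, consistent with the reported cost overhead.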
Motor imagery (MI) electroencephalography (EEG) is natural and comfortable for users, and has become a research hotspot in the field of the brain–computer interface (BCI). Exploring the inter-subject variation in MI-BCI performance is one of the fundamental problems in MI-BCI application. EEG microstates, with their high spatiotemporal resolution and multichannel information, can represent brain cognitive function. In this paper, four EEG microstates (MS1, MS2, MS3, MS4) were used to analyze the differences in subjects’ MI-BCI performance, and four microstate feature parameters (mean duration, occurrences per second, time coverage ratio, and transition probability) were calculated. The correlation between the resting-state EEG microstate feature parameters and the subjects’ MI-BCI performance was measured. Based on the negative correlation of the occurrence of MS1 and the positive correlation of the mean duration of MS3, a resting-state microstate predictor was proposed. Twenty-eight subjects were recruited to participate in our MI experiments to assess the performance of this predictor. The experimental results show that the average area under the curve (AUC) of our resting-state microstate predictor was 0.83, an increase of 17.9% over the spectral entropy predictor, indicating that the microstate feature parameters fit the subjects’ MI-BCI performance better than spectral entropy does. Moreover, the AUC of the microstate predictor is higher than that of the spectral entropy predictor at both the single-session level and the average level. Overall, our resting-state microstate predictor can help MI-BCI researchers better select subjects, save time, and promote MI-BCI development.
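The predictor construction, combining the negatively correlated MS1 occurrence and the positively correlated MS3 mean duration into a single score evaluated by AUC, can be sketched on synthetic data. The feature ranges, the linear relation to accuracy, the z-score combination, and the median split into high/low performers are all illustrative assumptions, not the paper’s method.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_subjects = 28  # matches the study's cohort size

# Synthetic stand-ins for the two predictive resting-state parameters
ms1_occurrence = rng.uniform(1.0, 4.0, n_subjects)   # occurrences per second
ms3_duration = rng.uniform(60.0, 120.0, n_subjects)  # mean duration, ms

# Hypothetical MI-BCI accuracy consistent with the reported correlations:
# negatively tied to MS1 occurrence, positively to MS3 mean duration
accuracy = (0.70
            - 0.05 * (ms1_occurrence - 2.5)
            + 0.002 * (ms3_duration - 90.0)
            + rng.normal(0.0, 0.02, n_subjects))

# Label high vs. low performers by a median split (illustrative choice)
label = accuracy > np.median(accuracy)

# Predictor score: z-scored MS3 duration minus z-scored MS1 occurrence
def zscore(v):
    return (v - v.mean()) / v.std()

score = zscore(ms3_duration) - zscore(ms1_occurrence)
auc = roc_auc_score(label, score)
```

Because the score is computed from resting-state features alone, it can be evaluated before any MI calibration session, which is what makes such a predictor useful for subject screening.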
Background: Although human–robot interaction is a rapidly burgeoning area of study within education, and social robots are being widely tested for use in schools, few studies have focused on early adolescent interactions with robots under actual classroom conditions. Objectives: We introduced an autonomous social robot (‘Pepper’) into a project-based learning environment at a public elementary/middle school in order to see how long-term exposure to a robot in a project-based classroom affected student conceptions of robots. Methods: We conducted unstructured classroom observations and focus-group interviews with students, and took videos of students interacting with the robot at key points in the project. We engaged in joint coding and memo writing to summarize key themes. Results: Our results showed the limitations of these social robots as interactive educational technology, but also revealed the complexity of young adolescent beliefs about robots as social actors. Although current technology limits the ability of robots to be widely deployed in public-school classrooms, skillfully designed interventions using social robots have the potential to motivate and engage students. Takeaways: Exposure to the robot stimulated students to discuss robots as social actors, raised issues about the gender identification of artificial agents in the classroom, and stimulated discussion of what constitutes a social being. The initial novelty of the humanoid robot enhanced engagement with the longer-term project and also challenged teachers to be more reflective and flexible in planning the project.